Scour
🦙 Simple finetuning LLMs · Ollama
Scoured 2717 posts in 49.3 ms
LlamaLib: A cross-platform C++/C# library for local LLMs based on llama.cpp
github.com · 2d · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
Thoughts on LLMs
finestructure.co · 11h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Oatmeal - Constraint propagation for fun
eli.li · 18h · Discuss: Lobsters, Hacker News
📊 Vector Databases
[RFC PATCH v1 0/4] Machine Learning (ML) library in Linux kernel
lore.kernel.org · 2d · Discuss: Lobsters, Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Finding the needle in the logstack: Reducing LLM context with TF-IDF
eliseomartelli.it · 3d
🔵 LLM frameworks and AI libraries for TypeScript
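The technique named in the entry above is easy to prototype. A minimal sketch (assuming scikit-learn is available; this is not the linked article's code) that scores log lines by TF-IDF similarity to a query and keeps only the most relevant ones for the LLM's context:

```python
# Minimal sketch: rank log lines by TF-IDF similarity to a query and keep
# only the most relevant ones for the prompt. Illustrative only; assumes
# scikit-learn, not the article's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def shrink_context(log_lines, query, top_k=20):
    """Return the top_k log lines most similar to the query."""
    vectorizer = TfidfVectorizer()
    # Fit on the logs plus the query so both share one vocabulary.
    matrix = vectorizer.fit_transform(log_lines + [query])
    log_vectors, query_vector = matrix[:-1], matrix[-1]
    scores = cosine_similarity(log_vectors, query_vector).ravel()
    ranked = sorted(zip(scores, log_lines), key=lambda pair: pair[0], reverse=True)
    return [line for _, line in ranked[:top_k]]

if __name__ == "__main__":
    logs = ["GET /health 200", "ERROR db timeout on orders", "GET /health 200"]
    print(shrink_context(logs, "database error", top_k=1))
```

The same idea scales to chunked documents: vectorize once, score per request, and send only the top-k chunks to the model instead of the whole log.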
Show HN: Polymcp and Ollama for Simple Local and Cloud LLM Execution
news.ycombinator.com · 5d · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
Unlocking core memories with GoldSrc engine and CS 1.6 (2025)
danielbrendel.com · 9h · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
The Rise of Local Speech Recognition
oatmealapp.com · 3h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
How I Program with LLMs
blog.wesleyabbey.io · 4d · Discuss: Hacker News
🤖 Coding Automation
Show HN: Model Training Memory Simulator
czheo.github.io · 12h · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
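A training-memory simulator of this kind largely comes down to counting bytes per parameter. The sketch below uses common rules of thumb for full finetuning with Adam in mixed precision; the constants are assumptions, not the linked simulator's model:

```python
# Rough estimate of GPU memory for full finetuning with Adam in mixed
# precision. Constants are common rules of thumb, not the simulator's model.
def training_memory_gb(n_params: float, activation_gb: float = 0.0) -> float:
    bytes_per_param = (
        2    # fp16/bf16 weights
        + 2  # fp16/bf16 gradients
        + 4  # fp32 master weights
        + 8  # Adam first and second moments (fp32)
    )
    return n_params * bytes_per_param / 1e9 + activation_gb

if __name__ == "__main__":
    # A 7B-parameter model needs on the order of 112 GB before activations,
    # which is why memory-efficient finetuning methods exist.
    print(f"{training_memory_gb(7e9):.0f} GB")
```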
Quantization-Aware Distillation
ternarysearch.blogspot.com · 19h · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
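Quantization-aware distillation combines two standard ingredients: fake-quantized student weights (trained through a straight-through estimator) and a KL loss that pulls the student toward a full-precision teacher. A generic sketch under assumed details (PyTorch, per-tensor symmetric int8, temperature-scaled KL), not the article's recipe:

```python
# Generic sketch of quantization-aware distillation: the student sees
# fake-quantized weights (straight-through estimator) and is trained to
# match the teacher's softened output distribution. Assumes PyTorch;
# int8, symmetric scaling, and KL loss are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Straight-through estimator: forward uses q, backward passes gradients to w.
    return w + (q - w).detach()

class QuantLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight), self.bias)

teacher = nn.Linear(16, 4)   # stand-in for a pretrained full-precision model
student = QuantLinear(16, 4)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0                      # distillation temperature

for _ in range(100):
    x = torch.randn(32, 16)
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened teacher and student distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```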
Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU
github.com · 1d · Discuss: Hacker News, r/LocalLLaMA
🔥 Svelte
Take Back the EM Dash
spin.atomicobject.com · 1d · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
Achieving Ultra-Fast AI Chat Widgets
cjroth.com · 23h · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
LLMs Are Prediction Machines
kaelandt.github.io · 2h · Discuss: Hacker News
🔵 LLM frameworks and AI libraries for TypeScript
Build a Compiler in Five Projects
kmicinski.com · 1d
🤖 Coding Automation
Optimized LLM Inference Engines
rishirajacharya.com · 4d
⚙️ Finetuning LLMs faster with less memory
Writing an LLM from scratch, part 32c – Interventions: removing dropout
gilesthomas.com · 2d · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory
NotebookLM: The AI that only learns from you
byandrev.dev · 1d · Discuss: Hacker News
🔄 AI Pipeline design and techniques
EBM vs. LLMs: Our Kona EBM a 96% vs. 2% Sudoku Benchmark
logicalintelligence.com · 2d · Discuss: Hacker News
⚙️ Finetuning LLMs faster with less memory